A hands-on Python guide to the principles of generating new knowledge by following logical processes in knowledge graphs. It discusses the limitations of LLMs in structured reasoning compared with the rigorous logical processes required in certain fields.
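To make that concrete, here is a minimal sketch of forward-chaining inference over a toy triple-based knowledge graph; the facts, rule, and function names are illustrative and not taken from the guide:

```python
# Toy knowledge graph as (subject, predicate, object) triples.
# Purely illustrative; a real guide might use rdflib or a dedicated reasoner.
triples = {
    ("Socrates", "is_a", "Human"),
    ("Human", "subclass_of", "Mortal"),
    ("Mortal", "subclass_of", "LivingThing"),
}

def forward_chain(facts):
    """Apply simple inference rules until no new triples can be derived."""
    facts = set(facts)
    while True:
        derived = set()
        for (x, p1, y1) in facts:
            for (y2, p2, z) in facts:
                if y1 != y2:
                    continue
                # Rule 1: subclass_of is transitive.
                if p1 == "subclass_of" and p2 == "subclass_of":
                    derived.add((x, "subclass_of", z))
                # Rule 2: is_a propagates up the subclass hierarchy.
                if p1 == "is_a" and p2 == "subclass_of":
                    derived.add((x, "is_a", z))
        if derived <= facts:          # fixed point: nothing new to add
            return facts
        facts |= derived

for triple in sorted(forward_chain(triples) - triples):
    print("inferred:", triple)
# inferred: ('Human', 'subclass_of', 'LivingThing')
# inferred: ('Socrates', 'is_a', 'LivingThing')
# inferred: ('Socrates', 'is_a', 'Mortal')
```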
Explores the dynamic relationship between language and cognition, and the role of Large Language Models (LLMs) in expanding our understanding of the functional significance of language.
The article discusses the limitations of Large Language Models (LLMs) in planning and self-verification tasks, and proposes an LLM-Modulo framework to leverage their strengths more effectively. The framework combines LLMs with external model-based verifiers to generate, evaluate, and improve plans, ensuring their correctness and efficiency.
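A rough sketch of what such a generate-and-verify loop can look like; `llm_propose_plan` and `verify_plan` are hypothetical placeholders for an LLM call and an external model-based verifier, not an API from the paper:

```python
# Hypothetical generate-test-critique loop in the spirit of LLM-Modulo.
# Both functions below are stubs standing in for real components.

def llm_propose_plan(task, feedback=None):
    """Ask the LLM for a candidate plan, optionally conditioned on
    critiques from earlier rounds (LLM as idea generator)."""
    raise NotImplementedError  # e.g. a chat-completion call in practice

def verify_plan(task, plan):
    """External, model-based verifier (e.g. a domain simulator or solver).
    Returns (is_valid, critiques); it, not the LLM, owns correctness."""
    raise NotImplementedError

def llm_modulo(task, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):
        plan = llm_propose_plan(task, feedback)   # generate a candidate
        ok, critiques = verify_plan(task, plan)   # sound verifier as judge
        if ok:
            return plan                           # only verified plans escape
        feedback = critiques                      # back-prompt with critiques
    return None                                   # no verified plan found
```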
"Simply put, we take the stance that LLMs are amazing giant external non-veridical memories that can serve as powerful cognitive orthotics for human or machine agents, if rightly used."
Last week, Anthropic announced a significant breakthrough in our understanding of how large language models work. The research focused on Claude 3 Sonnet, the mid-sized version of Anthropic’s latest frontier model. Anthropic showed that it could transform Claude's otherwise inscrutable numeric representation of words into a combination of ‘features’, many of which are interpretable by human beings. The vectors Claude uses to represent words can be understood as sums of ‘features’: vectors that represent a variety of abstract concepts, from immunology to coding errors to the Golden Gate Bridge. This research could prove useful for Anthropic and the broader industry, potentially leading to new tools to detect model misbehavior or prevent it altogether.
"scaling sparse autoencoders has been a major priority of the Anthropic interpretability team, and we're pleased to report extracting high-quality features from Claude 3 Sonnet, 1 Anthropic's medium-sized production model.
We find a diversity of highly abstract features. They both respond to and behaviorally cause abstract behaviors. Examples of features we find include features for famous people, features for countries and cities, and features tracking type signatures in code. Many features are multilingual (responding to the same concept across languages) and multimodal (responding to the same concept in both text and images), as well as encompassing both abstract and concrete instantiations of the same idea (such as code with security vulnerabilities, and abstract discussion of security vulnerabilities)."
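A minimal NumPy sketch of this sum-of-features picture, in the spirit of a sparse autoencoder; the dimensions, weights, and names are made up and are not Anthropic's actual architecture:

```python
import numpy as np

# Illustrative sparse-autoencoder decomposition: an activation vector is
# reconstructed as a sparse, non-negative combination of learned feature
# directions. Sizes are invented; real dictionaries are far larger and
# the weights are trained, not random.
d_model, n_features = 512, 4096
rng = np.random.default_rng(0)

W_enc = rng.normal(size=(d_model, n_features)) / np.sqrt(d_model)     # encoder
W_dec = rng.normal(size=(n_features, d_model)) / np.sqrt(n_features)  # feature directions
b_enc = np.zeros(n_features)

def decompose(activation):
    """Return feature activations for one residual-stream vector."""
    return np.maximum(activation @ W_enc + b_enc, 0.0)  # ReLU; mostly zeros after training

def reconstruct(feature_acts):
    """Sum of feature directions, weighted by how active each feature is."""
    return feature_acts @ W_dec

x = rng.normal(size=d_model)   # stand-in for a model activation
f = decompose(x)               # which 'concepts' are active, and how strongly
x_hat = reconstruct(f)         # x is approximated by sum_i f[i] * W_dec[i]
```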
"...a feature that activates when Claude reads a scam email (this presumably supports the model’s ability to recognize such emails and warn you not to respond to them). Normally, if one asks Claude to generate a scam email, it will refuse to do so. But when we ask the same question with the feature artificially activated sufficiently strongly, this overcomes Claude's harmlessness training and it responds by drafting a scam email."